A number of recent discussions comparing computer simulation and traditional experimentation have focused on the significance of “materiality.” I challenge several claims emerging from this work and suggest that computer simulation studies are material experiments in a straightforward sense. After discussing some of the implications of this material status for the epistemology of computer simulation, I consider the extent to which materiality (in a particular sense) is important when it comes to making justified inferences about target systems on the basis of experimental results.
According to an adequacy-for-purpose view, models should be assessed with respect to their adequacy or fitness for particular purposes. Such a view has been advocated by scientists and philosophers...
This article explores some of the roles of computer simulation in measurement. A model-based view of measurement is adopted and three types of measurement—direct, derived, and complex—are distinguished. It is argued that while computer simulations on their own are not measurement processes, in principle they can be embedded in direct, derived, and complex measurement practices in such a way that simulation results constitute measurement outcomes. Atmospheric data assimilation is then considered as a case study. This practice, which involves combining information from conventional observations and simulation-based forecasts, is characterized as a complex measuring practice that is still under development. The case study reveals challenges that are likely to resurface in other measuring practices that embed computer simulation. It is also noted that some practices that embed simulation are difficult to classify; they suggest a fuzzy boundary between measurement and non-measurement. Outline: 1 Introduction; 2 A Contemporary View of Measurement; 3 Three Types of Measurement; 4 Can Computer Simulations Measure Real-World Target Systems?; 5 Case Study: Atmospheric Data Assimilation (5.1 Why data assimilation?; 5.2 A complex measuring practice under development; 5.3 Epistemic iteration); 6 The Boundaries of Measurement; 7 Epistemology, Not Terminology.
This article identifies conditions under which robust predictive modeling results have special epistemic significance (related to truth, confidence, and security) and considers whether those conditions hold in the context of present-day climate modeling. The findings are disappointing. When today’s climate models agree that an interesting hypothesis about future climate change is true, it cannot be inferred, via the arguments considered here anyway, that the hypothesis is likely to be true, that scientists’ confidence in the hypothesis should be significantly increased, or that a claim to have evidence for the hypothesis is now more secure.
Lloyd (2009) contends that climate models are confirmed by various instances of fit between their output and observational data. The present paper argues that what these instances of fit might confirm are not climate models themselves, but rather hypotheses about the adequacy of climate models for particular purposes. This required shift in thinking—from confirming climate models to confirming their adequacy-for-purpose—may sound trivial, but it is shown to complicate the evaluation of climate models considerably, both in principle and in practice.
Can computer simulation results be evidence for hypotheses about real-world systems and phenomena? If so, what sort of evidence? Can we gain genuinely new knowledge of the world via simulation? I argue that evidence from computer simulation is aptly characterized as higher-order evidence: it is evidence that other evidence regarding a hypothesis about the world has been collected. Insofar as particular epistemic agents do not have this other evidence, it is possible that they will gain genuinely new knowledge of the world via simulation. I illustrate with examples inspired by uses of simulation in meteorology and astrophysics.
We call attention to an underappreciated way in which non-epistemic values influence evidence evaluation in science. Our argument draws upon some well-known features of scientific modeling. We show that, when scientific models stand in for background knowledge in Bayesian and other probabilistic methods for evidence evaluation, conclusions can be influenced by the non-epistemic values that shaped the setting of priorities in model development. Moreover, it is often infeasible to correct for this influence. We further suggest that, while this value influence is not particularly prone to the problem of wishful thinking, it could have problematic non-epistemic consequences in some cases.
Today’s most sophisticated simulation studies of future climate employ not just one climate model but a number of models. I explain why this “ensemble” approach has been adopted—namely, as a means of taking account of uncertainty—and why a comprehensive investigation of uncertainty remains elusive. I then defend a middle ground between two camps in an ongoing debate over the transformation of ensemble results into probabilistic predictions of climate change, highlighting requirements that I refer to as ownership, justification, and robustness.
Simulation-based weather and climate prediction now involves the use of methods that reflect a deep concern with uncertainty. These methods, known as ensemble prediction methods, produce multiple simulations for predictive periods of interest, using different initial conditions, parameter values and/or model structures. This paper provides a non-technical overview of current ensemble methods and considers how the results of studies employing these methods should be interpreted, paying special attention to probabilistic interpretations. A key conclusion is that, while complicated inductive arguments might be given for the trustworthiness of probabilistic weather forecasts obtained from ensemble studies, analogous arguments are out of reach in the case of long-term climate prediction. In light of this, the paper considers how predictive uncertainty should be conveyed to decision makers.
This paper critically examines Weisberg’s weighted feature matching account of model-world similarity. A number of concerns are raised, including that Weisberg provides an account of what underlies scientific judgments of relative similarity, when what is desired is an account of the sorts of model-target similarities that are necessary or sufficient for achieving particular types of modeling goal. Other concerns relate to the details of the account, in particular to the content of feature sets, the nature of shared features and the assumed independence of feature weightings.
After showing how Deborah Mayo’s error-statistical philosophy of science might be applied to address important questions about the evidential status of computer simulation results, I argue that an error-statistical perspective offers an interesting new way of thinking about computer simulation models and has the potential to significantly improve the practice of simulation model evaluation. Though intended primarily as a contribution to the epistemology of simulation, the analysis also serves to fill in details of Mayo’s epistemology of experiment.
Allan Franklin has identified a number of strategies that scientists use to build confidence in experimental results. This paper shows that Franklin's strategies have direct analogues in the context of computer simulation and then suggests that one of his strategies—the so-called 'Sherlock Holmes' strategy—deserves a privileged place within the epistemologies of experiment and simulation. In particular, it is argued that while the successful application of even several of Franklin's other strategies (or their analogues in simulation) may not be sufficient for justified belief in results, the successful application of a slightly elaborated version of the Sherlock Holmes strategy is sufficient.
The theoretical foundations of climate science have received little attention from philosophers thus far, despite a number of outstanding issues. We provide a brief, non-technical overview of several of these issues – related to theorizing about climates, climate change, internal variability and more – and attempt to make preliminary progress in addressing some of them. In doing so, we hope to open a new thread of discussion in the emerging area of philosophy of climate science, focused on theoretical foundations.
Climate change fingerprint studies investigate the causes of recent climate change. I argue that these studies have much in common with Steel’s (2008) streamlined comparative process tracing, illustrating a mechanisms-based approach to extrapolation in which the mechanisms of interest are simulated rather than physically instantiated. I then explain why robustness and variety-of-evidence considerations turn out to be important for understanding the evidential value of climate change fingerprint studies.
In 1904, Norwegian physicist Vilhelm Bjerknes published what would become a landmark paper in the history of meteorology. In that paper, he proposed that daily weather forecasts could be made by calculating later states of the atmosphere from an earlier state using the laws of hydrodynamics and thermodynamics (Bjerknes 1904). He outlined a set of differential equations to be solved and advocated the development of graphical and numerical solution methods, since analytic solution was out of the question. Using these theory-based equations to produce daily forecasts, however, turned out to be more difficult than anticipated. Graphical solution techniques had limited success, and a first attempt to use …
An uncertainty report describes the extent of an agent’s uncertainty about some matter. We identify two basic requirements for uncertainty reports, which we call faithfulness and completeness. We then discuss two pitfalls of uncertainty assessment that often result in reports that fail to meet these requirements. The first involves adopting a one-size-fits-all approach to the representation of uncertainty, while the second involves failing to take account of the risk of surprises. In connection with the latter, we respond to the objection that it is impossible to account for the risk of genuine surprises. After outlining some steps that both scientists and the bodies who commission uncertainty assessments can take to help avoid these pitfalls, we explain why striving for faithfulness and completeness is important.
Only a decade ago, the topic of scientific understanding remained one that philosophers of science largely avoided. Earlier discussions by Hempel and others had branded scientific understanding a mere subjective state or feeling, one to be studied by psychologists perhaps, but not an important or fruitful focus for philosophers of science. Even as scientific explanation became a central topic in philosophy of science, little attention was given to understanding. Over the last decade, however, this situation has changed. Analyses of scientific understanding that do not treat it as a subjective state or feeling have been offered and debated, and both the epistemic value and the pitfalls of purported psychological …
As a device used by scientists in the course of performing research, the digital computer might be considered a scientific instrument. But if so, what is it an instrument for? This paper explores a number of answers to this question, focusing on the use of computers in a simulating mode.
This chapter identifies conditions under which robust predictive modeling results have special epistemic significance—related to truth, confidence, and security—and considers whether those conditions are met in the context of climate modeling today. The findings are disappointing. When today’s climate models agree that an interesting hypothesis about future climate change is true, it cannot be inferred, via the arguments considered here anyway, that the hypothesis is likely to be true, nor that confidence in the hypothesis should be significantly increased, nor that a claim to have evidence for the hypothesis is now more secure. In some other modeling contexts, the prospects for such arguments are brighter.
Computer simulation modeling is an important part of contemporary scientific practice but has not yet received much attention from philosophers. The present project helps to fill this lacuna in the philosophical literature by addressing three questions that arise in the context of computer simulation of Earth's climate. Computer simulation experimentation commonly is viewed as a suspect methodology, in contrast to the trusted mainstay of material experimentation. Are the results of computer simulation experiments somehow deeply problematic in ways that the results of material experiments are not? I argue against categorical skepticism toward the results of computer simulation experiments by revealing important parallels in the epistemologies of material and computer simulation experimentation. It has often been remarked that simple computer simulation models—but not complex ones—contribute substantially to our understanding of the atmosphere and climate system. Is this view of the relative contributions of simple and complex models tenable? I show that both simple and complex climate models can promote scientific understanding and argue that the apparent contribution of simple models depends upon whether a causal or deductive account of scientific understanding is adopted. When two incompatible scientific theories are under consideration, they typically are viewed as competitors, and we seek evidence that refutes at least one of the theories. In the study of climate change, however, logically incompatible computer simulation models are accepted as complementary resources for investigating future climate. How can we make sense of this use of incompatible models? I show that a collection of incompatible climate models persists in part because of difficulties faced in evaluating and comparing climate models. I then discuss the rationale for using these incompatible models together and argue that this climate model pluralism has both competitive and integrative components.
Computer simulation and philosophy of science. Wendy S. Parker (Department of Philosophy, Ellis Hall 202, Ohio University, Athens, OH 45701, USA). Metascience (Online ISSN 1467-9981, Print ISSN 0815-0796), pp. 1–4. DOI: 10.1007/s11016-011-9567-8.